  • Spatial interpolation methods for generating spatially continuous data from point locations of environmental variables are essential for ecosystem management and biodiversity conservation. They can be classified into three groups (Li and Heap 2008): 1) non-geostatistical methods (e.g., inverse distance weighting), 2) geostatistical methods (e.g., ordinary kriging: OK) and 3) combined methods (e.g., regression kriging). Machine learning methods, such as random forest (RF) and support vector machine (SVM), have shown their robustness in data mining fields. However, they have not been applied to the spatial prediction of environmental variables (Li and Heap 2008). Given that none of the existing spatial interpolation methods is superior to the others, several questions remain, namely: 1) could machine learning methods be applied to the spatial prediction of environmental variables; 2) how reliable are their predictions; 3) could combining these methods with the existing interpolation methods improve the predictions; and 4) what contributes to their accuracy? To address these questions, we conducted a simulation experiment to compare the predictions of several methods for mud content on the southwest Australian marine margin. In this study, we discuss results derived from this experiment, visually examine the spatial predictions, and compare the results with findings in previous publications. The outcomes of this study have both practical and theoretical importance and can be applied to the spatial prediction of a range of environmental variables for informed decision making in environmental management. This study reveals a new direction for spatial interpolation in environmental sciences and provides alternative methods for it.
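As a concrete illustration of the simplest method class named above, the sketch below implements inverse distance weighting (IDW) in Python. The sample coordinates and mud-content values are hypothetical, not data from the study.

```python
import numpy as np

def idw_predict(xy_known, values, xy_targets, power=2.0):
    """Inverse distance weighting: each prediction is a weighted mean of
    the sample values, with weights proportional to 1/distance**power."""
    values = np.asarray(values, dtype=float)
    preds = []
    for pt in np.atleast_2d(xy_targets):
        d = np.linalg.norm(np.asarray(xy_known) - pt, axis=1)
        if np.any(d == 0):                    # target coincides with a sample
            preds.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        preds.append(np.sum(w * values) / np.sum(w))
    return np.array(preds)

# Hypothetical mud-content samples: (x, y) locations and % mud values
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mud = np.array([10.0, 20.0, 30.0, 40.0])
print(idw_predict(xy, mud, [[0.5, 0.5]]))  # equidistant from all samples -> [25.]
```

At a point equidistant from all samples the weights are equal, so IDW reduces to a simple mean; closer to any one sample, that sample dominates.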

  • Geoscience Australia (GA) has an active research interest in using multibeam bathymetry, backscatter data and their derivatives together with geophysical data, sediment samples, biological specimens and underwater video/still footage to create seabed habitat maps. This allows GA to provide spatial information about the physical and biological character of the seabed to support management of the marine estate. The main advantage of using multibeam systems over other techniques is that they provide spatially continuous maps that can be related to physical samples and video observations. Here we present results of a study that aims to reliably and repeatably delineate hard and soft seabed substrates using bathymetry, backscatter and their derivatives. Two independent approaches to the analysis of multibeam data are tested: (i) a two-stage, classification-based clustering method, based solely on acoustic backscatter angular response curves, is used to derive a substrate type map; and (ii) a prediction-based classification is produced using the Random Forest method based on bathymetry, backscatter data and their derivatives, with support from video and sediment data. Data for the analysis were collected by Geoscience Australia and the Australian Institute of Marine Science on the Van Diemen Rise in the Timor Sea using the RV Solander. The mapped area is characterised by carbonate banks, ridges and terraces that form hardground with patchy sediment cover, and valleys and plains covered by muddy sediment. The clustering method yielded classification accuracies of 78-87% for hard and soft seabed types when evaluated against seabed types observed in underwater video. The prediction-based approach achieved a classification accuracy of 92% based on 10-fold cross-validation. These results are consistent with the current state of knowledge on geoacoustics. Patterns associated with geomorphic facies and biological categories are also observed.
These results demonstrate the utility of acoustic data to broadly and objectively characterise the seabed substrate and thereby inform our understanding of the distribution of key habitat types.
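The prediction-based approach can be sketched minimally as a Random Forest classifier scored by 10-fold cross-validation. The features and hard/soft labels below are synthetic stand-ins for the bathymetry and backscatter derivatives, so the score is for illustration only, not the 92% reported above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Synthetic stand-ins for predictors such as depth, backscatter, slope, rugosity
X = rng.normal(size=(n, 4))
# Toy labelling rule: 1 = hard substrate, 0 = soft substrate
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"10-fold CV accuracy: {scores.mean():.2f}")
```

In practice the labels would come from the video and sediment observations, and the feature columns from the gridded multibeam derivatives at each sample site.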

  • The information within this document and associated DVD is intended to assist emergency managers in tsunami planning and preparation activities. The Attorney General's Department (AGD) has supported Geoscience Australia (GA) in developing a range of products to support the understanding of tsunami hazard through the Australian Tsunami Warning System Project. The work reported here is intended to further build the capacity of the QLD State Government in developing inundation models for prioritised locations. Internally stored data: /nas/cds/internal/hazard_events/sudden_onset_hazards/tsunami_inundation/gold_coast/gold_coast_tsunami_scenario_2009

  • In this study, we aim to identify the most appropriate methods for spatial interpolation of seabed sand content for the AEEZ using samples extracted in August 2010 from Geoscience Australia's Marine Samples Database. The predictive accuracy changes with the method, input secondary variables, model averaging, search window size and study region, but not with the choice of mtry. No single method performs best for all the tested scenarios. Of the 18 compared methods, RFIDS and RFOK are the most accurate in all three regions. Overall, of the 36 combinations of input secondary variables, methods and regions, RFIDS, 6RFIDS and RFOK were among the most accurate methods in all three regions. Model averaging further improved the prediction accuracy, and the most accurate methods reduced the prediction error by up to 7%. RFOKRFIDS, with a search window size of 5 and an mtry of 4, produced more realistic predictions than the control and is recommended for predicting sand content across the AEEZ if a single method is required. This study provides suggestions and guidelines for improving the spatial interpolation of marine environmental data.
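A minimal sketch of the RFIDS idea, assuming the usual construction in this line of work (random forest models the trend from covariates, the RF residuals are interpolated by inverse distance squared, and the two parts are summed): all function names and data below are illustrative, not the study's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ids(xy_known, vals, xy_targets, power=2.0):
    """Inverse-distance interpolation; power=2 gives inverse distance squared."""
    out = np.empty(len(xy_targets))
    for i, pt in enumerate(xy_targets):
        d = np.linalg.norm(xy_known - pt, axis=1)
        d = np.where(d == 0, 1e-12, d)       # avoid division by zero at samples
        w = 1.0 / d ** power
        out[i] = np.sum(w * vals) / np.sum(w)
    return out

def rfids_predict(xy, X, y, xy_new, X_new, mtry=4):
    """RFIDS sketch: RF trend from covariates + IDS-interpolated RF residuals."""
    rf = RandomForestRegressor(n_estimators=200, max_features=mtry, random_state=0)
    rf.fit(X, y)
    resid = y - rf.predict(X)                # spatially structured leftover signal
    return rf.predict(X_new) + ids(xy, resid, xy_new)
```

Here mtry maps onto scikit-learn's max_features; swapping the residual interpolator from IDS to ordinary kriging would give the RFOK variant, and averaging the two outputs gives an RFOKRFIDS-style prediction.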

  • For the past decade, staff at Geoscience Australia (GA), Australia's Commonwealth Government geoscientific agency, have routinely performed 3D gravity and magnetic modelling as part of our geoscience investigations. For this work, we have used a number of different commercial software programs, all of which have been based on a Cartesian mesh spatial framework. These programs have come as executable files that were compiled to operate in a Windows environment on single core personal computers (PCs). In recent times, a number of factors have caused us to consider a new approach to this modelling work. The drivers for change include: 1) models with very large lateral extents where the effects of Earth curvature are a consideration, 2) a desire to ensure that the modelling of separate regions is carried out in a consistent and managed fashion, 3) migration of scientific computing to off-site High Performance Computing (HPC) facilities, and 4) development of virtual globe environments for integration and visualization of 3D spatial objects. Our response has been to do the following: 1) form a collaborative partnership with researchers at the Colorado School of Mines (CSM) and the China University of Geosciences (CUG) to develop software for spherical mesh modelling of gravity and magnetic data, 2) ensure that we have access to the source code for any modelling software so that we can customize and compile it for the HPC environment of our choosing, 3) learn about the different types of HPC environments, 4) investigate which type of HPC environment would have the optimum mix of availability to us, compute resources, and architecture, and 5) promote the in-house development of a freely available virtual globe application called 'EarthSci', built on the open-source Eclipse Rich Client Platform (RCP), which in turn uses the NASA World Wind Software Development Kit (SDK) as the globe rendering engine.

  • Geoscience Australia (GA) has been acquiring both broadband and long-period magnetotelluric (MT) data over the last few years along deep seismic reflection survey lines across Australia, often in collaboration with the State/Territory geological surveys and the University of Adelaide. Recently, new three-dimensional (3D) inversion code has become available from Oregon State University. This code is parallelised and has been compiled on the NCI supercomputer at the Australian National University. Much of the structure of the Earth in the regions of the seismic surveys is complex and 3D, and MT data acquired along profiles in such regions are better imaged using 3D code rather than 1D or 2D code. Preliminary conductivity models produced from the Youanmi MT survey in Western Australia correlate well with interpreted seismic structures and contain more geological information than previous 2D models. GA has commenced a program to re-model previously acquired MT data with the new code, to provide more robust information on the conductivity structure of the shallow to deep Earth in the vicinity of the seismic transects.

  • The development of the Indian Ocean Tsunami Warning and Mitigation System (IOTWS) has occurred rapidly over the past few years and there are now a number of centres that perform tsunami modelling within the Indian Ocean, both for risk assessment and for the provision of forecasts and warnings. The aim of this work is to determine to what extent event-specific tsunami forecasts from different numerical forecast systems differ. This has implications for the inter-operability of the IOTWS. Forecasts from eight separate tsunami forecast systems are considered. Eight hypothetical earthquake scenarios within the Indian Ocean and ten output points at a range of depths were defined. Each forecast centre provided, where possible, time series of sea-level elevation for each of the scenarios at each location. Comparison of the resulting time series shows that the main details of the tsunami forecast, such as arrival times and characteristics of the leading waves, are similar. However, there is considerable variability in the value of the maximum amplitude (hmax) for each event: on average, the standard deviation of hmax is approximately 70% of the mean. This variability is likely due to differences in the implementations of the forecast systems, such as different numerical models, specification of initial conditions, bathymetry datasets, etc. The results suggest that tsunami forecasts and advisories from different centres for a particular event may conflict with each other. This represents the range of uncertainty that exists in the real-time situation.
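The reported spread is easy to picture with a toy calculation: for one event, collect the hmax values from the different forecast systems and compute the ratio of their standard deviation to their mean. The values below are invented for illustration only.

```python
import numpy as np

# Hypothetical hmax (metres) from eight forecast systems for one scenario
hmax = np.array([0.4, 1.1, 0.3, 1.8, 0.7, 0.5, 1.5, 0.2])

cv = hmax.std() / hmax.mean()   # coefficient of variation: std as fraction of mean
print(f"std/mean = {cv:.2f}")   # -> std/mean = 0.68, i.e. roughly 70% of the mean
```

A coefficient of variation near 0.7 means the forecasts disagree on amplitude by roughly the size of the amplitude itself, even when arrival times agree.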

  • Geoscience Australia has developed a number of open source risk models to estimate hazard, damage or financial loss to residential communities from natural hazards; these models are used to underpin disaster risk reduction activities. Two of these models will be discussed here: the Earthquake Risk Model (EQRM) and a hydrodynamic model called ANUGA, developed in collaboration with the ANU. Both models have been developed in Python using scientific and GIS packages such as Shapely, Numeric and SciPy. This presentation will outline key lessons learnt in developing scientific software in Python. Methods of maintaining and assessing code quality will be discussed: (1) what makes a good unit test; (2) how defects in the code were discovered quickly by being able to visualise the output data; and (3) how characterisation tests, which describe the actual behaviour of a system, are useful for finding unintended system changes. The challenges involved in optimising and parallelising Python code will also be presented. This is particularly important in scientific simulations, as they use considerable computational resources and involve large data sets. The focus will be on: profiling; NumPy; using C code; and parallelisation of applications to run on clusters. Reduction of memory use by using a class to represent a group of items, instead of one object per item, will also be discussed.
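The memory-reduction pattern mentioned last can be sketched briefly: rather than one Python object per item (each with its own attribute dictionary), a single class holds NumPy arrays for the whole group, so per-item overhead disappears and operations vectorise. The class and attribute names below are illustrative, not from EQRM or ANUGA.

```python
import numpy as np

class Particles:
    """A group of items stored column-wise in NumPy arrays
    (struct-of-arrays), instead of one Python object per item."""
    def __init__(self, n):
        self.x = np.zeros(n)
        self.y = np.zeros(n)
        self.depth = np.zeros(n)

    def deepen(self, dz):
        # One vectorised operation over the whole group,
        # with no per-item attribute lookups or object allocation.
        self.depth += dz

p = Particles(1_000_000)
p.deepen(0.5)
print(p.depth[0])   # 0.5
```

For a million items this stores three contiguous float arrays rather than a million objects, which typically cuts memory use by an order of magnitude and makes the update loop a single NumPy call.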

  • One of the important inputs to a probabilistic seismic hazard assessment is the expected rate at which earthquakes occur within the study region. The rate of earthquakes is a function of the rate at which the crust is being deformed, mostly by tectonic stresses. This paper will present two contrasting methods of estimating the strain rate at the scale of the Australian continent. The first method is based on statistically analysing the recently updated national earthquake catalogue, while the second uses a geodynamic model of the Australian plate and the forces that act upon it. For the first method, we show a couple of examples of the strain rates predicted across Australia using different statistical techniques. However, no matter which technique is used, the measurable seismic strain rates are typically in the range of 10^-16 s^-1 to around 10^-18 s^-1, depending on location. By contrast, the geodynamic model predicts a much more uniform strain rate of around 10^-17 s^-1 across the continent. The level of uniformity of the true distribution of long-term strain rate in Australia is likely to be somewhere between these two extremes. Neither estimate is consistent with the Australian plate being completely rigid and free from internal deformation (i.e., a strain rate of exactly zero). This paper will also give an overview of how this kind of work affects the national earthquake hazard map, and how future high-precision geodetic estimates of strain rate should help to reduce the uncertainty in this important parameter for probabilistic seismic hazard assessments.
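To make the quoted orders of magnitude concrete, one standard way to turn a catalogue into a strain rate is Kostrov-style moment summation, strain rate ≈ ΣM0 / (2 μ V T). The abstract does not say this is the technique used, and every number below is an assumption chosen purely to illustrate the arithmetic.

```python
# Kostrov-style order-of-magnitude estimate (all values are illustrative
# assumptions, not figures from the paper).
mu = 3e10                      # shear modulus, Pa (typical crustal value)
area = 1e12                    # region area: 1e6 km^2 expressed in m^2
thickness = 3e4                # assumed 30 km seismogenic thickness, m
volume = area * thickness      # deforming crustal volume, m^3
seconds = 100 * 3.156e7        # 100-year catalogue window, s
sum_m0 = 6e19                  # assumed cumulative seismic moment, N*m
                               # (roughly a few Mw ~7 events per century)

strain_rate = sum_m0 / (2 * mu * volume * seconds)
print(f"{strain_rate:.1e} s^-1")   # order 1e-17, within the quoted range
```

With these assumptions the estimate lands near 10^-17 s^-1; shrinking the assumed moment release by a factor of ten moves it to the 10^-18 s^-1 end of the quoted range, which is why the catalogue-based estimates vary so strongly with location.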

  • A key component of Geoscience Australia's marine program involves developing products that contain spatial information about the seabed for Australia's marine jurisdiction. This spatial information is derived from sparse or unevenly distributed samples collected over a number of years using many different sampling methods. Spatial interpolation methods are used for generating spatially continuous information from the point samples. These methods are, however, often data- or even variable-specific, and it is difficult to select an appropriate method for any given dataset. Machine learning methods, like random forest (RF) and support vector machine (SVM), have proven to be among the most accurate methods in disciplines such as bioinformatics and terrestrial ecology. However, they have rarely been applied to the spatial interpolation of environmental variables using point samples. To improve the accuracy of spatial interpolations to better represent the seabed environment for a variety of applications, including prediction of biodiversity and surrogacy research, Geoscience Australia has, since 2008, conducted two simulation experiments to compare the performance of 14 mathematical and statistical methods for predicting seabed mud content in three regions (i.e., Southwest, North and Northeast) of Australia's marine jurisdiction. This study confirms the effectiveness of applying machine learning methods to spatial data interpolation, especially in combination with OK or IDS, and also confirms the effectiveness of averaging the predictions of these combined methods. Moreover, an alternative source of methods for spatial interpolation of both marine and terrestrial environmental properties using point survey samples has been identified, with associated improvements in accuracy over commonly used methods.